16 research outputs found
Deep learning technology for predicting solar flares from Geostationary Operational Environmental Satellite data
Solar activity, particularly solar flares, can have significant detrimental effects on both space-borne and ground-based systems and industries, with subsequent impacts on our lives. As a consequence, there is much current interest in creating systems that can make accurate solar flare predictions. This paper aims to develop a novel framework to predict solar flares using Geostationary Operational Environmental Satellite (GOES) X-ray flux 1-minute time series data. These data are fed to three integrated neural networks to deliver the predictions. The first neural network (NN) converts the GOES X-ray flux 1-minute data into Markov Transition Field (MTF) images. The second neural network uses an unsupervised feature learning algorithm to learn the MTF image features. The third neural network uses both the learned features and the MTF images, which are processed by a Deep Convolutional Neural Network to generate the flare predictions. To the best of our knowledge, this is the first flare prediction system based entirely on the analysis of pre-flare GOES X-ray flux data. The results are evaluated using several performance measurement criteria that are presented in this paper.
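As a rough illustration of the first stage, the following minimal sketch (assuming a univariate GOES X-ray flux series stored in a NumPy array; the quantile binning and series length are illustrative choices, not the authors' exact settings) computes a Markov Transition Field image from a 1-D time series:

```python
import numpy as np

def markov_transition_field(x, n_bins=8):
    """Compute a Markov Transition Field (MTF) image from a 1-D series.

    Each sample is assigned to a quantile bin; a first-order Markov
    transition matrix W is estimated from consecutive bin pairs; the
    MTF entry (i, j) is the transition probability between the bins
    of x[i] and x[j].
    """
    x = np.asarray(x, dtype=float)
    # Quantile binning: interior edges chosen so each bin holds roughly equal mass.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)                      # bin index per sample
    # Estimate the transition matrix from consecutive samples.
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalise
    # Spread transition probabilities over all time-index pairs.
    return W[np.ix_(bins, bins)]

# Example with a synthetic flux-like series (placeholder data).
flux = np.abs(np.sin(np.linspace(0, 20, 256))) + 0.05 * np.random.rand(256)
mtf_image = markov_transition_field(flux, n_bins=8)
print(mtf_image.shape)  # (256, 256)
```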
A Fast and Accurate Iris Localization Technique for Healthcare Security System
In healthcare systems, a high security level is required to protect extremely sensitive patient records. The goal is to provide secure access to the right records at the right time with high patient privacy. As the most accurate biometric system, iris recognition can play a significant role in healthcare applications for accurate patient identification. In this paper, the cornerstone of building a fast and robust iris recognition system for healthcare applications is addressed, namely iris localization. Iris localization is an essential step for efficient iris recognition systems. The presence of extraneous features such as eyelashes, eyelids, the pupil and reflection spots makes correct iris localization challenging. In this paper, an efficient and automatic method is presented for localizing the inner and outer iris boundaries. The inner pupil boundary is detected after eliminating specular reflections using a combination of thresholding and morphological operations. Then, the outer iris boundary is detected using a modified Circular Hough transform. An efficient preprocessing procedure is proposed to enhance the iris boundary by applying a 2D Gaussian filter and histogram equalization. In addition, the pupil's parameters (e.g. radius and centre coordinates) are employed to reduce the search time of the Hough transform by discarding unnecessary edge points within the iris region. Finally, a robust and fast eyelid detection algorithm is developed, which employs an anisotropic diffusion filter with the Radon transform to fit the upper and lower eyelid boundaries. The performance of the proposed method is tested on two databases: the CASIA Version 1.0 and SDUMLA-HMT iris databases. The experimental results demonstrate the efficiency of the proposed method. Moreover, a comparative study with other established methods is also carried out.
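The early steps of such a pipeline can be approximated with standard OpenCV primitives. The sketch below (a simplified illustration, not the authors' implementation; threshold values, kernel sizes and Hough parameters are placeholder choices) removes bright specular spots by inpainting, thresholds the dark pupil, cleans it with morphological operations, and then searches for the outer iris boundary with a circular Hough transform:

```python
import cv2
import numpy as np

def localize_iris(gray):
    """Rough pupil and iris boundary localization on a grayscale eye image."""
    # 1. Suppress specular reflections: bright spots are masked and inpainted.
    _, spec_mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    clean = cv2.inpaint(gray, spec_mask, 5, cv2.INPAINT_TELEA)

    # 2. Pupil: the darkest region; threshold + morphology to remove noise.
    _, pupil_mask = cv2.threshold(clean, 60, 255, cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    pupil_mask = cv2.morphologyEx(pupil_mask, cv2.MORPH_OPEN, kernel)
    pupil_mask = cv2.morphologyEx(pupil_mask, cv2.MORPH_CLOSE, kernel)
    m = cv2.moments(pupil_mask)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    pupil_r = int(np.sqrt(m["m00"] / (255 * np.pi)))   # area-based radius estimate

    # 3. Outer iris boundary: smooth, equalize, then a circular Hough transform
    #    restricted to radii larger than the detected pupil radius.
    enhanced = cv2.equalizeHist(cv2.GaussianBlur(clean, (9, 9), 2))
    circles = cv2.HoughCircles(
        enhanced, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0],
        param1=100, param2=30,
        minRadius=pupil_r + 10, maxRadius=pupil_r * 4)
    iris = circles[0][0] if circles is not None else None
    return (cx, cy, pupil_r), iris

# Usage (hypothetical file name):
# gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
# pupil, iris = localize_iris(gray)
```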
An automatic corneal subbasal nerve registration system using FFT and phase correlation techniques for an accurate DPN diagnosis
Confocal microscopy is employed as a fast and non-invasive way to capture a sequence of images from different layers and membranes of the cornea. The captured images are used to extract useful clinical information for the early diagnosis of corneal diseases such as Diabetic Peripheral Neuropathy (DPN). In this paper, an automatic corneal subbasal nerve registration system is proposed. The main aim of the proposed system is to produce a new informative corneal image that contains structural and functional information. In addition, a colour-coded corneal image map is produced by overlaying a sequence of Cornea Confocal Microscopy (CCM) images that differ from each other in displacement, illumination, scaling and rotation. An automatic image registration method is proposed based on combining the advantages of the Fast Fourier Transform (FFT) and phase correlation techniques. The proposed registration algorithm searches for the best common features between a number of sequenced CCM images in the frequency domain to produce the formative image map. In this generated image map, each colour represents the severity level of a specific clinical feature, which can give ophthalmologists a clear and precise representation of the extracted clinical features from each nerve in the image map. Moreover, successful implementation of the proposed system and the availability of the required datasets open the door for other interesting ideas; for instance, it can be used to give ophthalmologists a summarized and objective description of a diabetic patient's health status using a sequence of CCM images captured from different imaging devices and/or at different times.
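The registration step rests on standard FFT-based phase correlation. A minimal sketch of that idea (estimating only a translational shift between two images with NumPy; the authors' full method also handles illumination, scaling and rotation) is:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the (row, col) displacement of `mov` relative to `ref`."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    # Normalised cross-power spectrum: magnitude removed, phase kept.
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    # The correlation peak gives the translation (circularly wrapped).
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    size = np.array(corr.shape)
    peak[peak > size / 2] -= size[peak > size / 2]   # map wrapped indices to signed shifts
    return peak

# Synthetic check: shift an image by (5, -3) pixels and recover the displacement.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
moved = np.roll(np.roll(img, 5, axis=0), -3, axis=1)
print(phase_correlation_shift(img, moved))  # approximately [ 5. -3.]
```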
A Robust Face Recognition System Based on Curvelet and Fractal Dimension Transforms
In this paper, a powerful face recognition system for authentication and identification tasks is presented, and a new facial feature extraction approach is proposed. The novel feature extraction method is based on combining the characteristics of the Curvelet transform and the Fractal dimension transform. The proposed system consists of four stages. Firstly, a simple preprocessing algorithm based on a sigmoid function is applied to standardize the intensity dynamic range of the input image. Secondly, a face detection stage based on the Viola-Jones algorithm is used to detect the face region in the input image. After that, the feature extraction stage, using a combination of the digital Curvelet via wrapping transform and the Fractal dimension transform, is implemented. Finally, the K-Nearest Neighbour (K-NN) and Correlation Coefficient (CC) classifiers are used for the recognition task. The performance of the proposed approach has been tested by carrying out a number of experiments on three well-known datasets with high diversity in facial expressions: the SDUMLA-HMT, Faces96 and UMIST datasets. All the experiments conducted indicate the robustness and effectiveness of the proposed approach for both authentication and identification tasks compared to other established approaches.
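One half of the proposed descriptor, the fractal dimension, is commonly estimated by box counting. The sketch below (a generic box-counting estimator on a binary image; the box sizes and toy input are illustrative and not taken from the paper) shows the idea:

```python
import numpy as np

def box_counting_dimension(binary, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2-D binary image.

    For each box size s, the image is tiled into s x s boxes and the number
    of boxes containing at least one foreground pixel is counted; the
    dimension is the slope of log(count) against log(1/s).
    """
    counts = []
    for s in box_sizes:
        h, w = binary.shape
        # Crop to a multiple of the box size so the tiling is exact.
        view = binary[: h - h % s, : w - w % s]
        blocks = view.reshape(view.shape[0] // s, s, view.shape[1] // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy usage: fractal dimension of a simple diagonal structure (placeholder input).
img = np.zeros((128, 128), dtype=bool)
np.fill_diagonal(img, True)
print(box_counting_dimension(img))  # close to 1.0 for a line-like structure
```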
Adaptive Deep Learning Detection Model for Multi-Foggy Images
Fog has different features and effects in every environment. Detecting whether there is fog in an image is a challenge, and identifying the type of fog has a substantial bearing on image defogging. Foggy scenes can be categorized in different ways, such as by fog density level or by fog type. Machine learning techniques have made a significant contribution to the detection of foggy scenes. However, most existing detection models are based on traditional machine learning, and only a few studies have adopted deep learning models. Furthermore, most existing machine learning detection models are based on fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multi-fog types of images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, a data collection phase based on eight resources to obtain the multi-fog scene dataset. Second, a classification experiment based on the ResNet-50 deep learning model to obtain detection results. Third, an evaluation phase in which the performance of the ResNet-50 detection model was compared against three different models. Experimental results show that the proposed model delivers stable classification performance for different foggy images, with a 96% score for each of Classification Accuracy Rate (CAR), Recall, Precision and F1-Score, which has both theoretical and practical significance. Our proposed model is suitable as a pre-processing step and might be considered for different real-time applications.
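A minimal sketch of the classification stage, assuming PyTorch/torchvision and a folder-per-class layout of the multi-fog dataset (the class count, dataset path, image size and optimizer settings here are illustrative, not the study's exact configuration):

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Four illustrative fog classes (assumed layout: one sub-folder per class).
NUM_CLASSES = 4  # e.g. inhomogeneous, homogeneous, dark, sky

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset path; replace with the downloaded multi-fog dataset.
train_set = datasets.ImageFolder("multi_fog/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet-50 pre-trained on ImageNet, with the final layer replaced.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):                       # short illustrative schedule
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```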
A multimodal deep learning framework using local feature representations for face recognition
The most recent face recognition systems mainly depend on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1 and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
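The idea of learning a deep representation on top of local descriptors rather than raw pixels can be sketched with off-the-shelf components. The example below is a rough stand-in only: a single scikit-learn BernoulliRBM followed by logistic regression approximates one greedily trained DBN layer, and the feature vectors, layer sizes and data are placeholders, not the MDFR implementation:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Placeholder "local feature" vectors (Curvelet/Fractal descriptors would be
# extracted here); random data stands in for real descriptors.
rng = np.random.default_rng(0)
X = rng.random((200, 256))          # 200 faces, 256-D local descriptors
y = rng.integers(0, 10, size=200)   # 10 hypothetical identities

# One RBM layer trained on the descriptors, then a linear classifier:
# a deeper DBN would stack several RBMs trained layer by layer.
model = Pipeline([
    ("scale", MinMaxScaler()),                         # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))            # training accuracy on the toy data
```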
A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images
Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. An accurate analysis of the nerve structures can assist the early diagnosis of this disease. This paper proposes a robust, fast and fully automatic nerve segmentation and morphometric parameter quantification system for corneal confocal microscope images. The segmentation part consists of three main steps. First, a preprocessing step is applied to enhance the visibility of the nerves and remove noise using anisotropic diffusion filtering, specifically a coherence filter followed by Gaussian filtering. Second, morphological operations are applied to remove unwanted objects in the input image, such as epithelial cells and small nerve segments. Finally, an edge detection step is applied to detect all the nerves in the input image; in this step, an efficient algorithm for connecting discontinuous nerves is proposed. In the morphometric parameter quantification part, a number of features are extracted, including the thickness, tortuosity and length of each nerve, which may be used for the early diagnosis of diabetic polyneuropathy and when planning Laser-Assisted in situ Keratomileusis (LASIK) or Photorefractive Keratectomy (PRK). The performance of the proposed segmentation system is evaluated against manually traced ground-truth images on a database of 498 corneal sub-basal nerve images (238 normal and 260 abnormal). In addition, the robustness and efficiency of the proposed system in extracting morphometric features with clinical utility were evaluated in 919 images taken from healthy subjects and diabetic patients with and without neuropathy. We demonstrate rapid (13 seconds/image), robust and effective automated corneal nerve quantification. The proposed system can be deployed as a clinical tool to support the expertise of ophthalmologists and save clinician time in a busy clinical setting.
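The segmentation stages described above map onto standard image-processing operations. A compressed sketch using scikit-image (the filter choices and parameter values are illustrative substitutes, e.g. a plain Gaussian plus morphological cleaning rather than the paper's coherence-enhancing diffusion):

```python
import numpy as np
from skimage import filters, morphology, feature

def segment_nerves(gray):
    """Rough corneal nerve segmentation: smooth, clean, then detect edges."""
    # 1. Preprocessing: Gaussian smoothing stands in for the paper's
    #    coherence (anisotropic diffusion) filtering.
    smoothed = filters.gaussian(gray, sigma=1.5)

    # 2. Remove small objects (e.g. epithelial cells, noise blobs)
    #    before edge detection.
    thresh = smoothed > filters.threshold_otsu(smoothed)
    cleaned = morphology.remove_small_objects(thresh, min_size=100)

    # 3. Edge detection of the nerve fibres, then thinning to 1-pixel
    #    centrelines so length/tortuosity can be measured downstream.
    edges = feature.canny(smoothed, sigma=1.0)
    nerves = morphology.skeletonize(edges | cleaned)
    return nerves

# Toy usage on random data (a real CCM image would be loaded instead).
rng = np.random.default_rng(0)
mask = segment_nerves(rng.random((256, 256)))
print(mask.dtype, mask.sum())
```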
A multi-biometric iris recognition system based on a deep learning approach
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality and vulnerability to spoofing. In this paper, an efficient and real-time multimodal biometric system is proposed based on building deep learning representations for images of both the right and left irises of a person, and fusing the results obtained using a ranking-level fusion method. The trained deep learning system, called IrisConvNet, has an architecture based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from the input image without any domain knowledge, where the input image represents the localized iris region, and then classify it into one of N classes. In this work, a discriminative CNN training scheme based on a combination of the back-propagation algorithm and the mini-batch AdaGrad optimization method is proposed for weight updating and learning rate adaptation, respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are also employed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval and IITD iris databases. The results obtained from the proposed system outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
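A minimal PyTorch sketch of the kind of training setup described, i.e. a small CNN with dropout, a softmax/cross-entropy output and mini-batch Adagrad (the layer sizes, the 64x64 input resolution and the number of classes are assumptions, not IrisConvNet's actual architecture):

```python
import torch
import torch.nn as nn

class SmallIrisCNN(nn.Module):
    """Toy CNN for localized iris images, classifying into N identities."""
    def __init__(self, num_classes=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                       # dropout regularization
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, num_classes),           # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallIrisCNN(num_classes=100)
# Mini-batch Adagrad adapts the learning rate per parameter during training.
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()                  # log-softmax + negative log-likelihood

# One illustrative training step on random data standing in for iris crops.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 100, (32,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```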
CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells
The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform image illumination, the 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, a Butterworth bandpass filter to enhance the CEC edges, and a moving average filter to adjust the brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of the CEC size. CEC morphology was measured as mean cell density (MCD, cells/mm2), mean cell area (MCA, μm2), mean cell perimeter (MCP, μm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage hexagonality coefficient). The CellsDeepNet system correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p < 0.0001 for all the extracted clinical features. The Bland–Altman plots showed excellent agreement. The percentage difference between the manual and automated estimations was smaller for the CellsDeepNet system than for the CEAS system and other state-of-the-art CEC segmentation systems on three large and challenging corneal endothelium image datasets captured using two different ophthalmic devices.
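Two of the building blocks, CLAHE contrast enhancement and the morphometric statistics computed from the segmented cells, can be sketched as follows (a simplified illustration with an assumed pixel-to-micrometre scaling and toy labels, not the CellsDeepNet pipeline itself; the pleomorphism/hexagonality measure is omitted):

```python
import cv2
import numpy as np
from skimage import measure

def enhance_contrast(gray):
    """CLAHE to reduce non-uniform illumination in endothelial images."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def cell_morphometry(label_image, um_per_px=0.5):
    """Morphometric parameters from a labelled cell segmentation.

    um_per_px is an assumed pixel size; a real system would read it
    from the device calibration.
    """
    props = measure.regionprops(label_image)
    areas = np.array([p.area for p in props]) * um_per_px ** 2      # µm²
    perims = np.array([p.perimeter for p in props]) * um_per_px     # µm
    field_area_mm2 = label_image.size * (um_per_px / 1000.0) ** 2   # mm²
    return {
        "MCD (cells/mm2)": len(props) / field_area_mm2,
        "MCA (um2)": areas.mean(),
        "MCP (um)": perims.mean(),
        "polymegathism": areas.std() / areas.mean(),   # coefficient of cell-size variation
    }

# Toy usage with a synthetic 2-cell label image (placeholder for a U-Net output).
labels = np.zeros((100, 100), dtype=int)
labels[10:40, 10:40] = 1
labels[50:90, 50:90] = 2
print(cell_morphometry(labels))
```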